
Conversation

@red-hat-konflux[bot] commented on Aug 16, 2025

This PR contains the following updates:

| Package | Change | Age | Confidence |
| --- | --- | --- | --- |
| huggingface-hub | `==0.34.3` -> `==0.35.3` | | |

Release Notes

huggingface/huggingface_hub (huggingface-hub)

v0.35.3: [v0.35.3] Fix image-to-image target size parameter mapping & tiny agents allow tools list bug

Compare Source

This release includes two bug fixes:

  • Fix the image-to-image target size parameter mapping
  • Fix the tiny-agents allowed tools list handling

Full Changelog: huggingface/huggingface_hub@v0.35.2...v0.35.3

v0.35.2: [v0.35.2] Welcoming Z.ai as Inference Providers!

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.35.1...v0.35.2

New inference provider! 🔥

Z.ai is now officially an Inference Provider on the Hub. See full documentation here: https://huggingface.co/docs/inference-providers/providers/zai-org.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="zai-org")
completion = client.chat.completions.create(
    model="zai-org/GLM-4.5",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

print("\nThinking:")
print(completion.choices[0].message.reasoning_content)
print("\nOutput:")
print(completion.choices[0].message.content)
Thinking:
Okay, the user is asking about the capital of France. That's a pretty straightforward geography question. 

Hmm, I wonder if this is just a casual inquiry or if they need it for something specific like homework or travel planning. The question is very basic though, so probably just general knowledge. 

Paris is definitely the correct answer here. It's been the capital for centuries, since the Capetian dynasty made it the seat of power. Should I mention any historical context? Nah, the user didn't ask for details - just the capital. 

I recall Paris is also France's largest city and major cultural hub. But again, extra info might be overkill unless they follow up. Better keep it simple and accurate. 

The answer should be clear and direct: "Paris". No need to overcomplicate a simple fact. If they want more, they'll ask.

Output:
The capital of France is **Paris**.  

Paris has been the political and cultural center of France for centuries, serving as the seat of government, the residence of the President (Élysée Palace), and home to iconic landmarks like the Eiffel Tower, the Louvre Museum, and Notre-Dame Cathedral. It is also France's largest city and a global hub for art, fashion, gastronomy, and history.

Misc:

v0.35.1: [v0.35.1] Do not retry on 429 and skip forward ref in strict dataclass

Compare Source

  • Do not retry on 429 (only on 5xx) #3377
  • Skip unresolved forward ref in strict dataclasses #3376

Full Changelog: huggingface/huggingface_hub@v0.35.0...v0.35.1

v0.35.0: [v0.35.0] Announcing Scheduled Jobs: run cron jobs on GPU on the Hugging Face Hub!

Compare Source

Scheduled Jobs

In the v0.34.0 release, we announced Jobs, a new way to run compute on the Hugging Face Hub. This release adds Scheduled Jobs, which let you run Jobs on a regular basis. Think "cron jobs running on GPU".

This comes with a fully-fledged CLI:

hf jobs scheduled run @hourly ubuntu echo hello world
hf jobs scheduled run "0 * * * *" ubuntu echo hello world
hf jobs scheduled ps -a
hf jobs scheduled inspect <id>
hf jobs scheduled delete <id>
hf jobs scheduled suspend <id>
hf jobs scheduled resume <id>
hf jobs scheduled uv run @weekly train.py
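
For scripting, the commands above can also be driven from Python. The sketch below is a minimal illustration that simply wraps the documented hf jobs scheduled CLI with the standard-library subprocess module; it assumes the hf CLI is installed and authenticated, and it reuses the example schedule and image from the list above rather than any dedicated Python API.

import subprocess

def hf_jobs_scheduled(*args: str) -> str:
    """Run an `hf jobs scheduled ...` subcommand and return its stdout."""
    result = subprocess.run(
        ["hf", "jobs", "scheduled", *args],
        check=True,          # raise if the CLI reports an error
        capture_output=True,
        text=True,
    )
    return result.stdout

# Create the hourly example job from the list above.
print(hf_jobs_scheduled("run", "@hourly", "ubuntu", "echo", "hello", "world"))

# List every scheduled job, including suspended ones.
print(hf_jobs_scheduled("ps", "-a"))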

It is now possible to run a command with uv run:

hf jobs uv run --with lighteval -s HF_TOKEN lighteval endpoint inference-providers "model_name=openai/gpt-oss-20b,provider=groq" "lighteval|gsm8k|0|0"

Some other improvements have been added to the existing Jobs API for a better UX.

And finally, the Jobs documentation has been updated with new examples (and some fixes).

CLI updates

In addition to Scheduled Jobs, some improvements have been made to the hf CLI.

Inference Providers

Welcome Scaleway and PublicAI!

Two new partners have been integrated into Inference Providers: Scaleway and PublicAI (as part of releases 0.34.5 and 0.34.6).

Image-to-video

Image-to-video is now supported in the InferenceClient:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")

video = client.image_to_video(
    "cat.png",
    prompt="The cat starts to dance",
    model="Wan-AI/Wan2.2-I2V-A14B",
)
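
The returned value is the generated video as raw bytes, matching the v0.34.4 example further down, so it can be written straight to disk; the filename below is only an illustrative choice.

# `video` holds raw video bytes; persist them to any filename you like.
with open("cat_dance.mp4", "wb") as f:
    f.write(video)
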
Miscellaneous

The content-type header is now correctly set when sending an image or audio request (e.g. for the image-to-image task). It is inferred either from the filename or from the URL provided by the user. If the user passes raw bytes directly, the content-type header has to be set manually.

  • [InferenceClient] Add content-type header whenever possible + refactoring by @Wauplin in #3321
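
As an illustration of the raw-bytes case described above, here is a minimal sketch that sets the header itself through the client-level headers parameter; treating that parameter as the right place for the content type is an assumption, and the provider, file, and model names are placeholders only.

from huggingface_hub import InferenceClient

# Sketch: when sending raw bytes there is no filename or URL to infer the
# content type from, so set it explicitly on the client. The headers given
# here are attached to every request made by this client instance.
client = InferenceClient(
    provider="fal-ai",                      # placeholder provider
    headers={"Content-Type": "image/png"},  # must match the bytes being sent
)

with open("cat.png", "rb") as f:
    raw_bytes = f.read()

# Passing raw bytes instead of a path or URL, as described above.
image = client.image_to_image(
    raw_bytes,
    prompt="Turn the cat into a tiger",
    model="black-forest-labs/FLUX.1-Kontext-dev",  # placeholder model
)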

A .reasoning field has been added to the Chat Completion output. Some providers use it to return reasoning tokens separately from the .content stream of tokens.
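
A minimal sketch of reading that field, hedged because these notes show two spellings (reasoning here, reasoning_content in the Z.ai example above) and because providers that do not emit separate reasoning tokens will leave it empty; completion is assumed to come from a chat.completions.create call like the ones shown earlier.

# Probe both attribute names defensively; either may be absent or None.
message = completion.choices[0].message
reasoning = getattr(message, "reasoning", None) or getattr(message, "reasoning_content", None)
if reasoning:
    print("Thinking:", reasoning)
print("Output:", message.content)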

MCP & tiny-agents updates

tiny-agents now handles the AGENTS.md instruction file (see https://agents.md/).

Tools filtering has also been improved to avoid loading irrelevant tools from an MCP server.

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes
🏗️ internal

Community contributions

The following contributors have made changes to the library over the last release. Thank you!

v0.34.6: [v0.34.6]: Welcoming PublicAI as Inference Providers!

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.34.5...v0.34.6

⚡ New provider: PublicAI

[!Tip]
All supported PublicAI models can be found here.

Public AI Inference Utility is a nonprofit, open-source project building products and organizing advocacy to support the work of public AI model builders like the Swiss AI Initiative, AI Singapore, AI Sweden, and the Barcelona Supercomputing Center. Think of a BBC for AI, a public utility for AI, or public libraries for AI.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="publicai")
completion = client.chat.completions.create(
    model="swiss-ai/Apertus-70B-Instruct-2509",
    messages=[{"role": "user", "content": "What is the capital of Switzerland?"}],
)

print(completion.choices[0].message.content)

v0.34.5: [v0.34.5]: Welcoming Scaleway as Inference Providers!

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.34.4...v0.34.5

⚡ New provider: Scaleway

[!Tip]
All supported Scaleway models can be found here. For more details, check out its documentation page.

Scaleway is a European cloud provider, serving the latest LLMs through its Generative APIs alongside a complete cloud ecosystem.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="scaleway")

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Instruct-2507",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)

v0.34.4: [v0.34.4] Support Image to Video inference + QoL in jobs API, auth and utilities

Compare Source

The biggest update is support for the image-to-video task with the inference provider Fal AI:

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> video = client.image_to_video("cat.jpg", model="Wan-AI/Wan2.2-I2V-A14B", prompt="turn the cat into a tiger")
>>> with open("tiger.mp4", "wb") as f:
...     f.write(video)

And some quality-of-life improvements.

Full Changelog: huggingface/huggingface_hub@v0.34.3...v0.34.4


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

To execute skipped test pipelines, write the comment /ok-to-test.

This PR has been generated by MintMaker (powered by Renovate Bot).

@red-hat-konflux[bot] force-pushed the konflux/mintmaker/main/huggingface-hub-0.x branch from 66a17eb to 69e329f on September 20, 2025 08:49
@red-hat-konflux[bot] changed the title from "Update dependency huggingface-hub to v0.34.4" to "Update dependency huggingface-hub to v0.35.0" on Sep 20, 2025
@red-hat-konflux[bot] force-pushed the konflux/mintmaker/main/huggingface-hub-0.x branch from 69e329f to 071f0d3 on September 24, 2025 16:13
@red-hat-konflux[bot] changed the title from "Update dependency huggingface-hub to v0.35.0" to "Update dependency huggingface-hub to v0.35.1" on Sep 24, 2025
@red-hat-konflux[bot] force-pushed the konflux/mintmaker/main/huggingface-hub-0.x branch from 071f0d3 to ce54d90 on September 29, 2025 12:14
@red-hat-konflux[bot] changed the title from "Update dependency huggingface-hub to v0.35.1" to "Update dependency huggingface-hub to v0.35.2" on Sep 29, 2025
Signed-off-by: red-hat-konflux <126015336+red-hat-konflux[bot]@users.noreply.github.com>
@red-hat-konflux[bot] force-pushed the konflux/mintmaker/main/huggingface-hub-0.x branch from ce54d90 to ffcb9cf on September 29, 2025 16:15
@red-hat-konflux[bot] changed the title from "Update dependency huggingface-hub to v0.35.2" to "Update dependency huggingface-hub to v0.35.3" on Sep 29, 2025